
Installing Multiple Jibris as Docker Images — A More Scalable and Manageable Way of Jitsi Conference Recording

Why Jibri on Docker?

Jibri is the Jitsi component that records a conference. It is possible to install a single Jibri instance and connect it to Jitsi, but for multiple concurrent recordings you need multiple Jibri instances running and connected to your Jitsi environment. Since Jibri is a resource-consuming component in terms of CPU and memory usage, running these Jibri instances inside Docker containers becomes a much more feasible and manageable option when you need concurrent recordings. If you have many concurrent recordings, it is also not easy to manage many separate Jibri instances. A Dockerized Jibri installation makes it much easier to manage your Jibri instances, and you will gain some resources for your Jibris that would otherwise be used by a separate operating system per instance.

I will guide you through installing 6 Jibris as Docker images and explain how to configure your Jitsi environment. I assume that you have a basic understanding of Docker and Jitsi. If you don't, don't worry: by following the instructions below you will still be able to set up and configure your Jibris inside Docker for multiple concurrent recordings.

Setting up FQDN of Your Jibri Instance

Log in to your Jibri VM;

Run;

sudo hostnamectl set-hostname YOUR_JIBRI_DOMAIN

Edit the /etc/hostname file as follows;

YOUR_JIBRI_DOMAIN

Edit the /etc/hosts file as follows;

127.0.0.1       localhost
#YOUR_LOCAL_IP_IF_BEHIND_NAT  YOUR_JIBRI_DOMAIN  YOUR_HOST_NICK
YOUR_PUBLIC_IP  YOUR_JIBRI_DOMAIN  YOUR_HOST_NICK
127.0.0.1       localhost         YOUR_JIBRI_DOMAIN
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

To restart the VM, run;

reboot

After the restart, test your FQDN setup by running;

ping "$(hostname)"

It should ping 127.0.0.1 and the command output will be similar to;

PING YOUR_JIBRI_DOMAIN (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.045 ms
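
If the ping does not resolve to 127.0.0.1, double-check what the system thinks its hostname is. Assuming a systemd-based Ubuntu image, the static hostname reported below should be YOUR_JIBRI_DOMAIN;

hostnamectl status | grep "Static hostname"

Expected output;

   Static hostname: YOUR_JIBRI_DOMAIN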

Installing the ALSA Loopback Kernel Module

Install Extra Virtual Linux Kernel Modules;

apt update
apt install linux-image-extra-virtual

To load the ALSA loopback module into the kernel perform the following tasks as the root user;

Configure 12 capture/playback interfaces;

echo "options snd-aloop enable=1,1,1,1,1,1,1,1,1,1,1,1 index=0,1,2,3,4,5,6,7,8,9,10,11" > /etc/modprobe.d/alsa-loopback.conf

Set up the module to be loaded on boot;

echo "snd-aloop">>/etc/modules

Load the module into the running kernel;

modprobe snd-aloop

Check to see that the module is already loaded;

lsmod | grep snd_aloop

Output should be similar to;

snd_aloop              24576  0
snd_pcm                98304  1 snd_aloop
snd                    81920  3 snd_timer,snd_aloop,snd_pcm

If the output shows the snd-aloop module loaded, then the ALSA loopback configuration step is complete.

To make sure the module is loaded automatically when the system restarts, edit the /etc/default/grub file;

Modify the value of GRUB_DEFAULT from “0” to “1>2”
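
A minimal sketch of the change, assuming a standard Ubuntu GRUB setup where “1>2” typically selects the generic kernel entry under the “Advanced options” submenu; remember to regenerate the GRUB configuration afterwards so the new default takes effect;

# In /etc/default/grub
GRUB_DEFAULT="1>2"

# Then regenerate /boot/grub/grub.cfg with the new default entry
sudo update-grub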

Now reboot;

reboot

Check your Loopbacks;

ls -alh /proc/asound

Output should be similar to;

dr-xr-xr-x  16 root root 0 Nov 24 13:54 .
dr-xr-xr-x 192 root root 0 Nov 24 13:54 ..
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card0
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card1
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card10
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card11
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card2
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card3
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card4
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card5
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card6
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card7
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card8
dr-xr-xr-x   6 root root 0 Nov 24 13:55 card9
-r--r--r--   1 root root 0 Nov 24 13:55 cards
-r--r--r--   1 root root 0 Nov 24 13:55 devices
lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback -> card0
lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_1 -> card1
lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_2 -> card2
lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_3 -> card3
lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_4 -> card4
lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_5 -> card5
lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_6 -> card6
lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_7 -> card7
lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_8 -> card8
lrwxrwxrwx   1 root root 5 Nov 24 13:55 Loopback_9 -> card9
lrwxrwxrwx   1 root root 6 Nov 24 13:55 Loopback_A -> card10
lrwxrwxrwx   1 root root 6 Nov 24 13:55 Loopback_B -> card11
-r--r--r--   1 root root 0 Nov 24 13:55 modules

Installing Docker Environment

The system needs to be updated before installing Docker to make the installation safer and more reliable. Run;

sudo apt update
sudo apt upgrade

Note: To be safe, keep the current configuration files if you are asked during the system update.

Once we have updated the system, we need to install some necessary packages before we are ready to install Docker.

sudo apt-get install curl apt-transport-https ca-certificates software-properties-common

Note: Packages installed above;

  • apt-transport-https : lets the package manager transfer files and data over HTTPS
  • ca-certificates : lets the web browser and system check security certificates
  • curl : transfers data
  • software-properties-common : adds scripts to manage the software

Adding Docker repositories;

Add PGP Key;

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add repository;

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update repository info;

sudo apt update

Make sure you are installing from the Docker repo instead of the default Ubuntu repo with this command;

apt-cache policy docker-ce

A correct output will look like the following, possibly with different version numbers (check the first lines of the output):

docker-ce:
  Installed: (none)
  Candidate: 5:19.03.13~3-0~ubuntu-bionic
  Version table:
     5:19.03.13~3-0~ubuntu-bionic 500
        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages

As you can see, docker-ce is not installed yet and the install candidate comes from the Docker repository (download.docker.com), so we can move on to the next step.

Install Docker;

sudo apt install docker-ce

Check Docker Status;

sudo systemctl status docker
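
The service should be reported as running; a trimmed example of what to expect (PID, timestamps and versions will differ on your system);

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-11-24 13:54:01 UTC; 1min ago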

Install Docker Compose

apt install docker-compose

Installing Jibri Dockers

Pull the Jibri Docker image;

docker pull jitsi/jibri
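
To confirm that the image has been pulled, you can list it;

docker images jitsi/jibri

You should see a jitsi/jibri entry (the tag, image ID and size will differ on your system).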

Create the Jibri Docker configuration files and directories;

cd &&
mkdir jibri-docker &&
cd jibri-docker &&
touch .env &&
touch jibri.yml &&
mkdir config &&
cd config &&
touch .asoundrc1 &&
touch .asoundrc2 &&
touch .asoundrc3 &&
touch .asoundrc4 &&
touch .asoundrc5 &&
touch .asoundrc6 &&
cd ..
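
After running these commands (as root, so everything lands under /root), the directory layout should look like this;

/root/jibri-docker/
    .env
    jibri.yml
    config/
        .asoundrc1
        .asoundrc2
        .asoundrc3
        .asoundrc4
        .asoundrc5
        .asoundrc6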

Edit the /root/jibri-docker/.env file;

The file content will be as follows (replace the YOUR_* placeholders with your own values);

# JIBRI CONFIG
# Public URL of your Jitsi
PUBLIC_URL=YOUR_JITSI_DOMAIN
# Internal XMPP domain for authenticated services
XMPP_AUTH_DOMAIN=auth.YOUR_JITSI_DOMAIN
# XMPP domain for the internal MUC used for jibri, jigasi and jvb pools
XMPP_INTERNAL_MUC_DOMAIN=internal.auth.YOUR_JITSI_DOMAIN
# XMPP domain for the jibri recorder
XMPP_RECORDER_DOMAIN=recorder.YOUR_JITSI_DOMAIN
# Internal XMPP server
XMPP_SERVER=YOUR_JITSI_DOMAIN
# Internal XMPP domain
XMPP_DOMAIN=YOUR_JITSI_DOMAIN
# XMPP user for Jibri client connections
JIBRI_XMPP_USER=jibri
# XMPP password for Jibri client connections
JIBRI_XMPP_PASSWORD=YOUR_JIBRI_USER_PASSWORD
# MUC name for the Jibri pool
JIBRI_BREWERY_MUC=jibribrewery
# XMPP recorder user for Jibri client connections
JIBRI_RECORDER_USER=recorder
# XMPP recorder password for Jibri client connections
JIBRI_RECORDER_PASSWORD=YOUR_RECORDER_USER_PASSWORD
# Directory for recordings inside Jibri container
JIBRI_RECORDING_DIR=/config/recordings
# The finalizing script. Will run after recording is complete
JIBRI_FINALIZE_RECORDING_SCRIPT_PATH=/config/finalize.sh
 
# When jibri gets a request to start a service for a room, the room
# jid will look like: roomName@optional.prefixes.subdomain.xmpp_domain
# We'll build the url for the call by transforming that into:
# https://xmpp_domain/subdomain/roomName
# So if there are any prefixes in the jid (like jitsi meet, which
# has its participants join a muc at conference.xmpp_domain) then
# list that prefix here so it can be stripped out to generate
# the call url correctly
JIBRI_STRIP_DOMAIN_JID=conference
# Directory for logs inside Jibri container
JIBRI_LOGS_DIR=/config/logs
DISPLAY=:0
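
Note that the jibri.yml file below also references ${RESTART_POLICY} and ${CONFIG} (and passes TZ through), which are not defined above. If you keep those references, you may want to append something like the following to .env; the values here are assumptions you should adapt to your own setup;

# Restart policy for the Jibri containers (assumption: restart unless explicitly stopped)
RESTART_POLICY=unless-stopped
# Host directory under which the per-container /config volumes (jibri1, jibri2, ...) are created
CONFIG=/root/jibri-docker
# Timezone passed into the containers
TZ=UTC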

Edit the /root/jibri-docker/jibri.yml file (the configuration file for the Jibri Docker containers);

nano /root/jibri-docker/jibri.yml

The file content will be as follows for 2 Jibri Docker containers. To add more, copy and paste a Jibri service block (jibri1 or jibri2) and change the index numbers: the service name, the ${CONFIG}/jibriN volume and the .asoundrcN file. An example third block is shown after the file below.

version: '3'
services:
    jibri1:
        image: jitsi/jibri
        restart: ${RESTART_POLICY}
        volumes:
            - ${CONFIG}/jibri1:/config:Z
            - /dev/shm:/dev/shm
            - /root/jibri-docker/config/.asoundrc1:/home/jibri/.asoundrc
            - /root/jibri-docker/recordings:/config/recordings
        cap_add:
            - SYS_ADMIN
            - NET_BIND_SERVICE
        devices:
            - /dev/snd:/dev/snd
        environment:
            - XMPP_AUTH_DOMAIN
            - XMPP_INTERNAL_MUC_DOMAIN
            - XMPP_RECORDER_DOMAIN
            - XMPP_SERVER
            - XMPP_DOMAIN
            - JIBRI_XMPP_USER
            - JIBRI_XMPP_PASSWORD
            - JIBRI_BREWERY_MUC
            - JIBRI_RECORDER_USER
            - JIBRI_RECORDER_PASSWORD
            - JIBRI_RECORDING_DIR
            - JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
            - JIBRI_STRIP_DOMAIN_JID
            - JIBRI_LOGS_DIR
            - DISPLAY=:0
            - TZ
    jibri2:
        image: jitsi/jibri
        restart: ${RESTART_POLICY}
        volumes:
            - ${CONFIG}/jibri2:/config:Z
            - /dev/shm:/dev/shm
            - /root/jibri-docker/config/.asoundrc2:/home/jibri/.asoundrc
            - /root/jibri-docker/recordings:/config/recordings
        cap_add:
            - SYS_ADMIN
            - NET_BIND_SERVICE
        devices:
            - /dev/snd:/dev/snd
        environment:
            - XMPP_AUTH_DOMAIN
            - XMPP_INTERNAL_MUC_DOMAIN
            - XMPP_RECORDER_DOMAIN
            - XMPP_SERVER
            - XMPP_DOMAIN
            - JIBRI_XMPP_USER
            - JIBRI_XMPP_PASSWORD
            - JIBRI_BREWERY_MUC
            - JIBRI_RECORDER_USER
            - JIBRI_RECORDER_PASSWORD
            - JIBRI_RECORDING_DIR
            - JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
            - JIBRI_STRIP_DOMAIN_JID
            - JIBRI_LOGS_DIR
            - DISPLAY=:0
            - TZ
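
For reference, a third service block appended below jibri2 would follow the same pattern; only the service name, the ${CONFIG}/jibriN volume and the .asoundrcN file change;

    jibri3:
        image: jitsi/jibri
        restart: ${RESTART_POLICY}
        volumes:
            - ${CONFIG}/jibri3:/config:Z
            - /dev/shm:/dev/shm
            - /root/jibri-docker/config/.asoundrc3:/home/jibri/.asoundrc
            - /root/jibri-docker/recordings:/config/recordings
        cap_add:
            - SYS_ADMIN
            - NET_BIND_SERVICE
        devices:
            - /dev/snd:/dev/snd
        environment:
            - XMPP_AUTH_DOMAIN
            - XMPP_INTERNAL_MUC_DOMAIN
            - XMPP_RECORDER_DOMAIN
            - XMPP_SERVER
            - XMPP_DOMAIN
            - JIBRI_XMPP_USER
            - JIBRI_XMPP_PASSWORD
            - JIBRI_BREWERY_MUC
            - JIBRI_RECORDER_USER
            - JIBRI_RECORDER_PASSWORD
            - JIBRI_RECORDING_DIR
            - JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
            - JIBRI_STRIP_DOMAIN_JID
            - JIBRI_LOGS_DIR
            - DISPLAY=:0
            - TZ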

Edit the /root/jibri-docker/config/.asoundrcX files (X is the number of each container, i.e. the config file for Jibri container 1 is .asoundrc1).

Note: Each Jibri Docker container uses 2 ALSA loopback devices for recording. This appears to be a recent change in the Jibri Docker image configuration. Accordingly, each .asoundrc file uses a pair of loopbacks (Loopback and Loopback_1, Loopback_2 and Loopback_3, Loopback_4 and Loopback_5, and so on). The loopback naming convention starts with Loopback and continues as Loopback_1, Loopback_2 ... Loopback_9, Loopback_A, Loopback_B, Loopback_C, etc. For now, 12 loopbacks are defined in the system, which is enough for 6 Jibri containers running concurrently. ALSA allows a maximum of 32 loopback devices. The first 2 .asoundrc files will be as follows; the others should be configured accordingly.

Content of the /root/jibri-docker/config/.asoundrc1 file (which will be used by Jibri Docker instance 1) will be as follows.

pcm.amix {
  type dmix
  ipc_key 219345
  slave.pcm "hw:Loopback,0,0"
}
pcm.asnoop {
  type dsnoop
  ipc_key 219346
  slave.pcm "hw:Loopback_1,1,0"
}
pcm.aduplex {
  type asym
  playback.pcm "amix"
  capture.pcm "asnoop"
}
pcm.bmix {
  type dmix
  ipc_key 219347
  slave.pcm "hw:Loopback_1,0,0"
}
pcm.bsnoop {
  type dsnoop
  ipc_key 219348
  slave.pcm "hw:Loopback,1,0"
}
pcm.bduplex {
  type asym
  playback.pcm "bmix"
  capture.pcm "bsnoop"
}
pcm.pjsua {
  type plug
  slave.pcm "bduplex"
}
pcm.!default {
  type plug
  slave.pcm "aduplex"
}

Content of the /root/jibri-docker/config/.asoundrc2 file (which will be used by Jibri Docker instance 2) will be as follows.

pcm.amix {
  type dmix
  ipc_key 219345
  slave.pcm "hw:Loopback_2,0,0"
}
pcm.asnoop {
  type dsnoop
  ipc_key 219346
  slave.pcm "hw:Loopback_3,1,0"
}
pcm.aduplex {
  type asym
  playback.pcm "amix"
  capture.pcm "asnoop"
}
pcm.bmix {
  type dmix
  ipc_key 219347
  slave.pcm "hw:Loopback_3,0,0"
}
pcm.bsnoop {
  type dsnoop
  ipc_key 219348
  slave.pcm "hw:Loopback_2,1,0"
}
pcm.bduplex {
  type asym
  playback.pcm "bmix"
  capture.pcm "bsnoop"
}
pcm.pjsua {
  type plug
  slave.pcm "bduplex"
}
pcm.!default {
  type plug
  slave.pcm "aduplex"
}
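
Following the same pattern, here is a sketch of the /root/jibri-docker/config/.asoundrc3 file (used by Jibri Docker instance 3), which moves on to the next loopback pair, Loopback_4 and Loopback_5. The remaining files (.asoundrc4 to .asoundrc6) would continue with Loopback_6/Loopback_7, Loopback_8/Loopback_9 and Loopback_A/Loopback_B.

pcm.amix {
  type dmix
  ipc_key 219345
  slave.pcm "hw:Loopback_4,0,0"
}
pcm.asnoop {
  type dsnoop
  ipc_key 219346
  slave.pcm "hw:Loopback_5,1,0"
}
pcm.aduplex {
  type asym
  playback.pcm "amix"
  capture.pcm "asnoop"
}
pcm.bmix {
  type dmix
  ipc_key 219347
  slave.pcm "hw:Loopback_5,0,0"
}
pcm.bsnoop {
  type dsnoop
  ipc_key 219348
  slave.pcm "hw:Loopback_4,1,0"
}
pcm.bduplex {
  type asym
  playback.pcm "bmix"
  capture.pcm "bsnoop"
}
pcm.pjsua {
  type plug
  slave.pcm "bduplex"
}
pcm.!default {
  type plug
  slave.pcm "aduplex"
}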

To bring up the Jibri Docker containers;

cd /root/jibri-docker
docker-compose -f jibri.yml up -d

Note: If you’d like a container to restart after reboots or crashes, find the container ID with docker ps -a and use it with docker update --restart unless-stopped CONTAINER_ID

To list the running containers;

docker ps
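
With 2 Jibri containers running, the output will be similar to the following (IDs, timestamps and the exact container names depend on your system and Compose version);

CONTAINER ID   IMAGE         COMMAND   CREATED         STATUS         PORTS   NAMES
1a2b3c4d5e6f   jitsi/jibri   "/init"   2 minutes ago   Up 2 minutes           jibri-docker_jibri1_1
6f5e4d3c2b1a   jitsi/jibri   "/init"   2 minutes ago   Up 2 minutes           jibri-docker_jibri2_1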

To bring your Jibri containers down;

cd /root/jibri-docker
docker-compose -f jibri.yml down

Testing

Start a new meeting as a room owner. Open the “More” menu with the three dots in the bottom-right corner and click Start recording. You can start up to 6 concurrent recordings (one per Jibri container), as long as system resources allow.

To stop, open the “More” menu again with the three dots in the bottom-right corner and click Stop recording.

Log in to your Jibri server as root. You can find the recorded MP4 video files inside the /root/jibri-docker/recordings directory. For each recording session, Jibri creates a directory named with the session ID, and the recorded MP4 videos are placed under these directories.
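
As a purely hypothetical example (the session directory and file names are generated by Jibri, so yours will differ), the layout looks like this;

ls /root/jibri-docker/recordings
swrvauxhemfcucxo

ls /root/jibri-docker/recordings/swrvauxhemfcucxo
myroom_2020-11-24-14-02-33.mp4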

Now you have your new Dockerized Jibri instances ready for recording. If you need support for Jitsi, do not hesitate to contact us. We provide professional-grade Jitsi consulting services, including installation, integration, customisation and maintenance support.
